When Is There a Free Matrix Lunch?

Author

  • Manfred K. Warmuth
Abstract

The “no-free-lunch theorems” essentially say that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower expected loss than B as vice versa. This can be made precise for certain loss functions [WM97]. This note concerns itself with cases where seemingly harder matrix versions of the algorithms have the same on-line loss bounds as the corresponding vector versions. So it seems that you get a free “matrix lunch” (our title is, however, not meant to imply that we have a technical refutation of the no-free-lunch theorems). The simplest case of this phenomenon occurs in the so-called expert setting. We have n experts. In each trial the algorithm proposes a probability vector w over the n experts, receives a loss vector $\ell \in [0,1]^n$ for the experts, and incurs an expected loss $w \cdot \ell$. The Weighted Majority or Hedge algorithm uses exponential weights $w^t_i \sim w^1_i \, e^{-\eta \sum_{q=1}^{t-1} \ell^q_i}$ and has the following expected loss bound [FS97]:
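In its standard textbook form (assumed here rather than quoted from the note), with uniform initial weights $w^1_i = 1/n$, the Hedge guarantee bounds the total expected loss by $\big(\eta \min_i \sum_t \ell^t_i + \ln n\big) \big/ \big(1 - e^{-\eta}\big)$. The following is a minimal Python sketch of the exponential-weights update described above; the function name, the learning rate, and the uniform prior are illustrative assumptions, not taken from the paper.

import math
import random

def hedge(loss_vectors, eta=0.5):
    """Hedge / Weighted Majority with exponential weights (minimal sketch)."""
    total_expected_loss = 0.0
    weights = None
    for losses in loss_vectors:
        if weights is None:
            n = len(losses)
            weights = [1.0 / n] * n  # uniform initial weights w^1
        z = sum(weights)
        probs = [w / z for w in weights]  # probability vector w^t played in this trial
        total_expected_loss += sum(p * l for p, l in zip(probs, losses))  # expected loss w^t . l^t
        # multiplicative update: w_i <- w_i * exp(-eta * l_i),
        # so after t trials w^{t+1}_i ~ w^1_i * exp(-eta * sum of losses of expert i so far)
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_expected_loss

# Toy usage: 3 experts, 50 trials with losses drawn uniformly from [0, 1].
trials = [[random.random() for _ in range(3)] for _ in range(50)]
print(hedge(trials, eta=0.5))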


Similar articles

Martingale Measures for Discrete Time Processes with Infinite Horizon

Let $(S_t)_{t \in I}$ be an $\mathbb{R}^d$-valued adapted stochastic process on $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \in I}, P)$. A basic problem, occurring notably in the analysis of securities markets, is to decide whether there is a probability measure Q on $\mathcal{F}$ equivalent to P such that $(S_t)_{t \in I}$ is a martingale with respect to Q. It is known since the fundamental papers of Harrison-Kreps (1979), Harrison-Pliska (1981) and Kreps (1981) that there is an...
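As a brief restatement, with standard definitions assumed here rather than quoted from that paper: a probability measure Q on $\mathcal{F}$ equivalent to P is a martingale measure for $(S_t)_{t \in I}$ exactly when each $S_t$ is Q-integrable and

$$\mathbb{E}_Q\!\left[\, S_{t+1} \mid \mathcal{F}_t \,\right] = S_t \qquad \text{for all } t \in I .$$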


No-Free-Lunch theorems in the continuum

No-Free-Lunch Theorems state, roughly speaking, that the performance of all search algorithms is the same when averaged over all possible objective functions. This fact was precisely formulated for the first time in a now-famous paper by Wolpert and Macready, and then subsequently refined and extended by several authors, always in the context of a set of functions with discrete domain and codomain...
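For reference, the finite-domain statement being generalized, in roughly the notation of Wolpert and Macready (assumed here, not taken from this abstract), is that for any two search algorithms $a_1$ and $a_2$, any sample size m, and any sequence $d^y_m$ of m observed cost values,

$$\sum_{f} P\!\left(d^y_m \mid f, m, a_1\right) \;=\; \sum_{f} P\!\left(d^y_m \mid f, m, a_2\right),$$

where the sum ranges over all objective functions $f : \mathcal{X} \to \mathcal{Y}$ on a finite domain.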


How to Get a Free Lunch: A Simple Cost Model for Machine Learning Applications

This paper proposes a simple cost model for machine learning applications based on the notion of net present value. The model extends and unifies the models used in (Pazzani et al., 1994) and (Masand & Piatetsky-Shapiro, 1996). It attempts to answer the question "Should a given machine learning system now in the prototype stage be fielded?" The model’s inputs are the system’s confusion matrix, ...


Resident work hours: is there such a thing as a free lunch?

Analysis of data from 179 respondents indicated that the concept of the business lunch is alive and well among both purchasing and sales professionals. Neither the purchasing nor the sales professionals believed that there was a possibility for a high lev...


No Free Lunch for Noise Prediction

No-free-lunch theorems have shown that learning algorithms cannot be universally good. We show that no free lunch exists for noise prediction as well. We show that when the noise is additive and the prior over target functions is uniform, a prior on the noise distribution cannot be updated, in the Bayesian sense, from any finite data set. We emphasize the importance of a prior over the target functions...




Publication year: 2007